Nonconvex-nonconcave minimax optimization has been the focus of intense research over the last decade due to its broad applications in machine learning and operations research. Unfortunately, most existing algorithms cannot be guaranteed to converge and often suffer from limit cycles. Their global convergence relies on certain conditions that are difficult to check, including but not limited to the global Polyak-\L{}ojasiewicz condition, the existence of a solution satisfying the weak Minty variational inequality, and the $\alpha$-interaction dominant condition. In this paper, we develop the first provably convergent algorithm, called the doubly smoothed gradient descent ascent method, which eliminates limit cycles without requiring any additional conditions. We further show that the algorithm has an iteration complexity of $\mathcal{O}(\epsilon^{-4})$ for finding a game-stationary point, which matches the best iteration complexity of single-loop algorithms under nonconvex-concave settings. The algorithm presented here opens up a new path for designing provably convergent algorithms for nonconvex-nonconcave minimax optimization problems.
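The anchoring idea behind the doubly smoothed method can be illustrated on the classic bilinear game $f(x,y) = xy$, where plain simultaneous GDA spirals away from the saddle point. The sketch below is a minimal illustration of the double-smoothing idea (a proximal anchor for each player, updated on a slower timescale), not the paper's exact algorithm; the step size, anchor weight, and smoothing parameter are illustrative choices for this toy problem.

```python
# Minimal sketch of a doubly smoothed GDA on f(x, y) = x * y.
# Each player descends/ascends an augmented objective with a proximal
# anchor (z for x, v for y); the anchors trail the iterates slowly.
# All constants are illustrative choices for this toy problem.

def ds_gda(x=1.0, y=1.0, eta=0.05, r=1.0, beta=0.01, iters=100_000):
    z, v = x, y  # proximal anchors
    for _ in range(iters):
        gx = y + r * (x - z)   # grad_x of f + (r/2)(x - z)^2
        gy = x - r * (y - v)   # grad_y of f - (r/2)(y - v)^2
        x, y = x - eta * gx, y + eta * gy
        z += beta * (x - z)    # slow anchor updates
        v += beta * (y - v)
    return x, y

x, y = ds_gda()
# the anchored dynamics are driven to the saddle point (0, 0),
# whereas plain GDA (r = 0) spirals outward on this game
```

For fixed anchors the augmented game is strongly-convex-strongly-concave, so the fast iterates contract toward the augmented saddle point, and the slow anchor updates then contract toward the true saddle point.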
Fine-tuning pre-trained models has proven effective across a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it yields an entirely new model for each task. Recently, many research works have proposed to fine-tune only a small portion of the parameters while keeping most of the parameters shared across different tasks. These methods achieve surprisingly good performance and are shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Some natural questions arise: How does parameter sparsity lead to promising performance? Why is the model more stable than fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches according to how they choose which parameters to tune. Then, we show that all of these methods are in fact sparse fine-tuned models and conduct a novel theoretical analysis of them. We show that the sparsity actually imposes a regularization on the original model by controlling the upper bound of its stability. Such stability leads to better generalization, which has been empirically observed in many recent research works. Despite the effectiveness of sparsity grounded by our theory, how to choose the tunable parameters remains an open problem. To better choose the tunable parameters, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function. The tunable parameters are then determined by directly optimizing the approximation function. Experimental results show that our proposed SAM model outperforms many strong baselines and also verifies our theoretical analysis.
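The common core of the sparse fine-tuning methods discussed above can be summarized in a few lines: only a small mask of parameters receives gradient updates, while the rest stay frozen at their pre-trained values. The sketch below picks the mask by gradient magnitude, which is a hypothetical stand-in for illustration only, not the paper's SAM selection rule.

```python
import numpy as np

def sparse_finetune_step(theta, grad, k, lr=0.1):
    """One sparse fine-tuning update: tune only the k entries with the
    largest gradient magnitude (a hypothetical selection heuristic) and
    keep the rest frozen at their pre-trained values."""
    mask = np.zeros_like(theta, dtype=bool)
    mask[np.argsort(-np.abs(grad))[:k]] = True
    return np.where(mask, theta - lr * grad, theta), mask

theta = np.array([0.5, -1.0, 2.0, 0.1])   # toy pre-trained weights
grad = np.array([0.2, -0.05, 1.0, 0.0])   # toy task gradient
new_theta, mask = sparse_finetune_step(theta, grad, k=2)
# only the two largest-|gradient| coordinates (indices 2 and 0) move
```

Random and rule-based approaches correspond to different choices of `mask`; projection-based approaches derive it from an optimization problem, as SAM does.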
Nonconvex-concave minimax optimization has attracted wide interest in machine learning, including learning with robustness to data distribution, learning with non-decomposable loss, and adversarial learning. However, most existing works focus on gradient descent ascent (GDA) variants that can only be applied in smooth settings. In this paper, we consider a family of minimax problems whose objective function enjoys a nonsmooth composite structure in the variable of minimization and is concave in the variable of maximization. By fully exploiting the composite structure, we propose a smoothed proximal linear descent ascent (\textit{smoothed} PLDA) algorithm and further establish its $\mathcal{O}(\epsilon^{-4})$ iteration complexity, which matches that of smoothed GDA~\cite{zhang2020single} under smooth settings. Moreover, under the mild assumption that the objective function satisfies the one-sided Kurdyka-\L{}ojasiewicz condition with exponent $\theta \in (0,1)$, we can further improve the iteration complexity to $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$. To the best of our knowledge, this is the first provably efficient algorithm for nonsmooth nonconvex-concave problems that can achieve the optimal iteration complexity $\mathcal{O}(\epsilon^{-2})$ if $\theta \in (0,1/2]$. As a byproduct, we discuss different stationarity concepts and quantitatively clarify their relationships, which could be of independent interest. Empirically, we illustrate the effectiveness of the proposed smoothed PLDA on variation-regularized Wasserstein distributionally robust optimization problems.
This paper studies large-scale optimization problems on Riemannian manifolds whose objective function is a finite sum of negative log-probability losses. Such problems arise in various machine learning and signal processing applications. By introducing the notion of the Fisher information matrix in the manifold setting, we propose a novel Riemannian natural gradient method, which can be viewed as a natural extension of the natural gradient method from the Euclidean setting to the manifold setting. We establish the almost-sure global convergence of our proposed method under standard assumptions. Moreover, we show that if the loss function satisfies certain convexity and smoothness conditions and the input-output map satisfies a Riemannian Jacobian stability condition, then our proposed method enjoys a local linear convergence rate, or even a quadratic convergence rate under the Lipschitz continuity of the Riemannian Jacobian of the input-output map. We then prove that the Riemannian Jacobian stability condition can be satisfied by a two-layer fully connected neural network with batch normalization if the network width is sufficiently large. This demonstrates the practical relevance of our convergence rate results. Numerical experiments on applications arising from machine learning demonstrate the advantages of the proposed method over state-of-the-art ones.
The K-Subspaces (KSS) method is a generalization of the K-means method for subspace clustering. In this work, we present a local convergence analysis and a recovery guarantee for KSS, assuming data are generated by the semi-random union-of-subspaces model, in which $n$ points are randomly sampled from $K \ge 2$ overlapping subspaces. We show that if the initial assignment of the KSS method lies within a neighborhood of a true clustering, it converges at a superlinear rate and finds the correct clustering within $\Theta(\log\log n)$ iterations. Moreover, we propose a thresholding inner-product based spectral method for initialization and prove that it produces a point in this neighborhood. We also present numerical results for the studied method to support our theoretical developments.
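A minimal version of the KSS iteration described above alternates between fitting a $d$-dimensional subspace to each cluster via SVD and reassigning every point to the subspace with the smallest projection residual. The sketch below assumes noise-free data and a warm-start assignment, mirroring the local-convergence setting of the abstract; the toy instance uses two one-dimensional subspaces in $\mathbb{R}^3$.

```python
import numpy as np

def fit_basis(X, d):
    # top-d right singular vectors = orthonormal basis of the best-fit
    # d-dimensional subspace (through the origin) for the rows of X
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T

def kss(X, K, d, labels, n_iter=50):
    for _ in range(n_iter):
        bases = [fit_basis(X[labels == k], d) for k in range(K)]
        # distance from each point to each subspace (projection residual)
        res = np.stack([np.linalg.norm(X - X @ U @ U.T, axis=1)
                        for U in bases], axis=1)
        new_labels = res.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

rng = np.random.default_rng(0)
# two 1-D subspaces in R^3 spanned by e1 and e2, 50 points each
X = np.zeros((100, 3))
X[:50, 0] = rng.standard_normal(50)
X[50:, 1] = rng.standard_normal(50)
truth = np.repeat([0, 1], 50)
init = truth.copy()
init[rng.choice(100, 20, replace=False)] ^= 1  # corrupt 20% of labels
out = kss(X, K=2, d=1, labels=init)
# from this warm start, KSS recovers the true clustering
```

On this instance each fitted basis is exactly $e_1$ or $e_2$, so one reassignment step already restores the correct clusters, consistent with the fast local convergence described above.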
In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance tailored to large-scale graph learning tasks. Armed with the Luo-Tseng error bound condition~\citep{luo1992error}, the two proposed algorithms, called Bregman Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient (hBPG), enjoy convergence guarantees. Based on task-specific properties, our analysis further provides novel theoretical insights into how to select the best-fit method. As a result, we are able to provide comprehensive experiments validating the effectiveness of our methods on a host of tasks, including graph alignment, graph partition, and shape matching. In terms of both wall-clock time and modeling performance, the proposed methods achieve state-of-the-art results.
This paper proposes a Generalized Power Method (GPM) to tackle the community detection and group synchronization problems simultaneously in a direct nonconvex manner. Under the stochastic group block model (SGBM), theoretical analysis shows that the algorithm is able to exactly recover the ground truth in $O(n\log^2 n)$ time, sharply outperforming the benchmark method of semidefinite programming (SDP), which takes $O(n^{3.5})$ time. Moreover, a lower bound on the parameters is given as a necessary condition for exact recovery by GPM. The new bound breaches the information-theoretic threshold for pure community detection under the stochastic block model (SBM), thus demonstrating the superiority of our simultaneous optimization algorithm over the trivial two-stage method that performs the two tasks in succession. We also conduct numerical experiments on GPM and SDP to evidence and complement our theoretical analysis.
Group synchronization refers to estimating a collection of group elements from noisy pairwise measurements. Such a nonconvex problem has received much attention from numerous scientific fields, including computer vision, robotics, and cryo-electron microscopy. In this paper, we focus on the orthogonal group synchronization problem with general additive noise models under incomplete measurements, which is much more general than the commonly considered setting of complete measurements. Characterizations of the orthogonal group synchronization problem are given from the perspectives of optimality conditions as well as fixed points of the projected gradient ascent method, also known as the generalized power method (GPM). It is worth noting that these results still hold even without generative models. In the meantime, we derive a local error bound property for the orthogonal group synchronization problem, which is useful for the convergence rate analysis of different algorithms and can be of independent interest. Finally, based on the established local error bound property, we prove linear convergence of the GPM to a global maximizer under a general additive noise model. Our theoretical convergence result holds under several deterministic conditions, which can cover certain cases with adversarial noise, and as an example we specialize it to the setting of the Erdős–Rényi measurement graph and Gaussian noise.
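In the complete, noise-free case the generalized power method described above is easy to state: stack the unknowns $O_1,\dots,O_n$ into $X \in \mathbb{R}^{nd \times d}$, multiply by the measurement matrix $C = X^\star X^{\star\top}$, and project each $d \times d$ block back onto the orthogonal group via its polar factor. The sketch below is a minimal noise-free illustration; in this idealized setting a single projected step already recovers the ground truth up to a global rotation.

```python
import numpy as np

def polar(A):
    # orthogonal polar factor of a square matrix via SVD
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

def gpm(C, n, d, iters=10, seed=0):
    """Generalized power method: multiply by C, then project each
    d x d block back onto the orthogonal group."""
    rng = np.random.default_rng(seed)
    X = np.vstack([polar(rng.standard_normal((d, d))) for _ in range(n)])
    for _ in range(iters):
        Y = C @ X
        X = np.vstack([polar(Y[i * d:(i + 1) * d]) for i in range(n)])
    return X

n, d = 8, 3
rng = np.random.default_rng(1)
X_star = np.vstack([polar(rng.standard_normal((d, d))) for _ in range(n)])
C = X_star @ X_star.T          # noise-free complete measurements
X = gpm(C, n, d)
Q = polar(X_star.T @ X)        # best global rotation aligning X to X_star
err = np.linalg.norm(X - X_star @ Q)
# err is numerically zero: exact recovery up to a global rotation
```

Under noise and incomplete measurements, the same iteration runs with the observed blocks of $C$; the abstract's local error bound is what upgrades this to a linear convergence rate.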
We consider the problem of learning a graph from a finite set of noisy graph signal observations, the goal of which is to find a smooth representation of the graph signals. Such a problem is motivated by the desire to infer relational structure in large datasets and has been extensively studied in recent years. Most existing approaches focus on learning a graph on which the observed signals are smooth. However, the learned graph is prone to overfitting, as it does not take unobserved signals into account. To address this issue, we propose a novel graph learning model based on the distributionally robust optimization approach, which aims to identify a graph that not only provides a smooth representation of the observed signals but is also robust against uncertainty in them. On the statistics side, we establish out-of-sample performance guarantees for our proposed model. On the optimization side, we show that under a mild assumption on the graph signal distribution, our proposed model admits a smooth nonconvex optimization formulation. We then develop a projected gradient method to tackle this formulation and establish its convergence guarantees. Our formulation provides a new regularization perspective on the graph learning setting. Moreover, extensive numerical experiments on both synthetic and real-world data show that our model exhibits comparable, and more robust, performance across different populations of observed signals than non-robust models according to various metrics.
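To make the "smooth representation" objective concrete, a classical (non-robust) smoothness-based formulation learns edge weights $w_{ij} \ge 0$ by trading off the smoothness term $\sum_{ij} w_{ij}\|x_i - x_j\|^2$ against a quadratic regularizer, and a projected gradient method handles box constraints by clipping. The sketch below illustrates only this simpler classical model and the projected-gradient machinery; it is not the paper's distributionally robust formulation, and all constants are illustrative.

```python
import numpy as np

def learn_graph(X, alpha=1.0, beta=1.0, eta=0.1, iters=500):
    """Projected gradient descent for
         min_w  sum_ij w_ij * ||x_i - x_j||^2 - beta * sum_ij w_ij
                + alpha * ||w||^2    s.t.  0 <= w_ij <= 1,
    a classical smoothness-based graph learning objective.
    Rows of X are node signals; returns upper-triangular edge weights."""
    n = X.shape[0]
    iu, ju = np.triu_indices(n, k=1)
    d = np.sum((X[iu] - X[ju]) ** 2, axis=1)  # pairwise squared distances
    w = np.zeros_like(d)
    for _ in range(iters):
        grad = d - beta + 2 * alpha * w
        w = np.clip(w - eta * grad, 0.0, 1.0)  # projection onto the box
    return iu, ju, w

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))  # 6 nodes, 2-dimensional signals
i, j, w = learn_graph(X)
# close node pairs (small d_ij) receive large weights, distant pairs zero;
# the separable objective has the closed form clip((beta - d)/(2*alpha), 0, 1)
```

The paper's robust model replaces the fixed distances with a worst-case expectation over a distributional ball, but the projected gradient template stays the same.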
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
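The Global Response Normalization layer mentioned above has a compact definition: per channel, compute the global L2 norm over spatial positions, normalize it by its mean across channels, and use the result to recalibrate the features through a residual connection with a learnable scale and bias. Below is a NumPy rendering of that definition for NHWC tensors; zero-initializing gamma and beta makes the layer an identity at the start of training.

```python
import numpy as np

def grn(x, gamma, beta, eps=1e-6):
    """Global Response Normalization (ConvNeXt V2) for an NHWC tensor.
       gx: per-channel global L2 norm over spatial dims (H, W)
       nx: gx divided by its mean over channels (feature competition)
       output: gamma * (x * nx) + beta + x   (residual connection)"""
    gx = np.linalg.norm(x, axis=(1, 2), keepdims=True)   # (N, 1, 1, C)
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)    # (N, 1, 1, C)
    return gamma * (x * nx) + beta + x

x = np.random.default_rng(0).standard_normal((2, 4, 4, 8))
gamma = np.zeros(8)   # zero-initialized learnable scale
beta = np.zeros(8)    # zero-initialized learnable bias
y = grn(x, gamma, beta)
# with gamma = beta = 0 the layer reduces to the identity
```

Dividing each channel's global norm by the cross-channel mean amplifies channels that stand out and suppresses the rest, which is the inter-channel feature competition the abstract refers to.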